Feedback tests for async and sync profiles #23
Conversation
Force-pushed d347490 to 6ad7297
Force-pushed ea9738b to cafac1b
Force-pushed 3d95732 to ffe93a9
JJsudo7 left a comment:
Hi Vishwas,
Looks good overall.
Review comments/suggestions have been provided. Please check.
Thanks!
profile.delete_feedback(prompt_spec=(prompt, action))
logger.info("Retrieving prompt after deletion to verify removal")
show_prompt = profile.show_prompt(DUMMY_PROMPT)
assert response not in show_prompt
show_prompt is a good downstream validation for the response parameter.
But for the feedback_content parameter, we might want an upstream validation against the feedback index table. Is it possible to add one?
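A minimal sketch of what such an upstream check could look like. The table name `feedback_index` and column `content` are assumptions for illustration, not the actual Select AI schema:

```python
# Sketch only: "feedback_index" and "content" are assumed names, not the
# real schema. The cursor is any DB-API cursor (e.g. from oracledb).
def feedback_content_indexed(cursor, feedback_content):
    """Return True if feedback_content appears in the feedback index table."""
    cursor.execute(
        "SELECT COUNT(*) FROM feedback_index WHERE content = :c",
        {"c": feedback_content},
    )
    return cursor.fetchone()[0] > 0
```

A test could then assert `feedback_content_indexed(cursor, feedback_content)` right after the add-feedback call, complementing the downstream show_prompt check.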
def test_4004(profile):
    """CASE : Negative feedback test with SHOWSQL ordering by name"""
Could you highlight the difference between test cases 4001 and 4004?
Also, is negative coverage required in Python for unsupported actions like NARRATE and CHAT?
That might not be needed, as it is delegating the call to dbms_cloud_ai.feedback.
)
feedback_content = "print in descending order of total_points"
logger.info(
    "Expecting DatabaseError when adding negative feedback with prompt=%s action=%s sql_id=%s",
When providing SQL_ID and SQL_TEXT, we can have these cases:
- SQL_ID and SQL_TEXT match - expected: feedback addition passes
- SQL_ID and SQL_TEXT mismatch - expected error "ORA-20000: SQL_ID does not match with the SQL text"
Please check whether these cases are covered.
Is test 4007 case 2, since an error is expected?
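The two cases the reviewer describes can be illustrated with a stand-in. In the real suite this would be `profile.add_negative_feedback` raising `oracledb.DatabaseError`; everything below (the mapping, the helper, the SQL strings) is purely illustrative:

```python
# Illustrative stand-in for the match/mismatch cases described above.
# The error text mirrors the review comment; none of this is the real
# Select AI API.
KNOWN_SQL = {"abc123": "SELECT name FROM players ORDER BY total_points"}

def add_feedback(sql_id, sql_text):
    if KNOWN_SQL.get(sql_id) != sql_text:
        raise ValueError("ORA-20000: SQL_ID does not match with the SQL text")
    return "feedback added"

# Case 1: SQL_ID and SQL_TEXT match -> feedback addition passes
assert add_feedback("abc123", KNOWN_SQL["abc123"]) == "feedback added"

# Case 2: mismatch -> expected ORA-20000 error
try:
    add_feedback("abc123", "SELECT 1 FROM dual")
except ValueError as exc:
    assert "ORA-20000" in str(exc)
```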
- Providing both sql_id and sql_text is not supported in the function. This behaviour is consistent with dbms_cloud_ai.feedback.
- Covered in the new commit.
feedback_content = "print in descending order of total_points"
logger.info("Adding negative feedback for prompt=%s action=%s", prompt, action)
profile.add_negative_feedback(
I see another add_negative_feedback() call in this test case with SQL_ID. Would this be needed?
assert prompt in show_prompt
with pytest.raises(oracledb.DatabaseError) as exc_info:
    profile.add_positive_feedback(
Can we have logging for add_positive_feedback?
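A small sketch of the requested logging, wrapping the call quoted above. The wrapper name is made up, and the `profile.add_positive_feedback` signature is assumed from the quoted diff:

```python
import logging

logger = logging.getLogger(__name__)

def add_positive_feedback_logged(profile, prompt, action, response, feedback_content):
    # Log parameters before delegating, mirroring the logging style used
    # elsewhere in these tests; the profile API is assumed from the diff.
    logger.info("Adding positive feedback for prompt=%s action=%s", prompt, action)
    profile.add_positive_feedback(
        prompt_spec=(prompt, action),
        response=response,
        feedback_content=feedback_content,
    )
```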
def test_4013(profile):
    """CASE : SQL_TEXT DOES NOT MATCH ANY EXISTING FEEDBACK SQL_TEXT"""
Please add a similar test case for negative feedback.
Txn lauzhao_bug-38415980 introduced negative feedback addition without needing an existing SQL_TEXT; hence, negative feedback addition with a non-existing SQL_TEXT should pass.
Please refer txn joejos_rti-32183121 for the additional tests I have added for this.
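A sketch of the expected new behaviour: negative feedback is accepted even when the SQL_TEXT has no existing feedback entry. `FakeProfile` is a stand-in, not the real profile API:

```python
# Stand-in illustrating the behaviour change: negative feedback with a
# non-existing SQL_TEXT should now pass instead of raising.
class FakeProfile:
    def __init__(self):
        self.feedback = []

    def add_negative_feedback(self, sql_text, feedback_content):
        # Old behaviour: raise if sql_text had no existing feedback entry.
        # New behaviour (per lauzhao_bug-38415980): just record it.
        self.feedback.append((sql_text, feedback_content))

profile = FakeProfile()
profile.add_negative_feedback(
    sql_text="SELECT name FROM no_such_table",
    feedback_content="print in descending order of total_points",
)
assert len(profile.feedback) == 1  # addition passed
```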
assert response not in show_prompt
assert prompt not in show_prompt
Please check whether we have a test case for feedback deletion with SQL_ID.
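A deletion-by-SQL_ID case could look like this sketch. `FakeProfile` is a stand-in, and the `sql_id` keyword on `delete_feedback` is an assumption for illustration:

```python
# Stand-in sketch of deleting feedback by SQL_ID and verifying removal.
# The sql_id keyword on delete_feedback is assumed, not confirmed API.
class FakeProfile:
    def __init__(self):
        self.feedback = {"abc123": "print in descending order of total_points"}

    def delete_feedback(self, sql_id):
        self.feedback.pop(sql_id, None)

profile = FakeProfile()
profile.delete_feedback(sql_id="abc123")
assert "abc123" not in profile.feedback  # upstream check: entry removed
```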
async def test_4119(profile):
    """CASE : SQL_TEXT DOES NOT MATCH ANY EXISTING FEEDBACK SQL_TEXT"""
    sql_id = "sql_id_mismatch"
    with pytest.raises(oracledb.DatabaseError) as exc_info:
        profile.add_positive_feedback(sql_id=sql_id)
Please check whether we have a test for positive feedback with a valid SQL_ID.
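Conversely, a valid-SQL_ID positive case might be sketched like this. Again a stand-in: the class, the ID value, and the error message are all made up for illustration:

```python
# Stand-in: positive feedback succeeds for a known SQL_ID and raises for
# an unknown one, complementing the negative case in test_4119 above.
class FakeProfile:
    def __init__(self, valid_ids):
        self.valid_ids = set(valid_ids)
        self.feedback = []

    def add_positive_feedback(self, sql_id):
        if sql_id not in self.valid_ids:
            raise ValueError("unknown SQL_ID: %s" % sql_id)
        self.feedback.append(sql_id)

profile = FakeProfile(valid_ids={"known_sql_id"})
profile.add_positive_feedback("known_sql_id")
assert profile.feedback == ["known_sql_id"]  # valid id: addition passed
```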
feedback_content=feedback_content,
)
logger.info("Retrieving prompt via show_prompt for dummy input")
show_prompt = await profile.show_prompt(dummyPrompt)
Please log show_prompt for LRG integration.
prompt_spec=(prompt, action)
)
logger.info("Retrieving prompt after deletion to verify removal")
show_prompt = await profile.show_prompt(dummyPrompt)
feedback_content=feedback_content,
)
logger.info("Retrieving prompt via show_prompt for dummy input")
show_prompt = await profile.show_prompt(dummyPrompt)
Please make sure to add additional logging as applicable.
logger.info("Deleting feedback for prompt=%s action=%s", prompt, action)
await profile.delete_feedback(
    prompt_spec=(prompt, action)
)
Could you verify that the prompt is removed?
That is being verified using assert statements.
prompt_spec=(prompt, action),
response=response,
feedback_content=feedback_content,
)
Is there a way to confirm that feedback is added?
Yes, added logging for the feedback table.
Force-pushed 8aa01b2 to dfe772b
Vishwas-testing left a comment:
Used cursor for tests, as acknowledged by Abhishek.
Commits:
- Feedback tests for async and sync profiles
- Adding feedback logging
- address comments
- reviewed comments
- comments correction
Force-pushed b9547c8 to b20850f
prompt = PROMPT
action = Action.SHOWSQL
sql_id = SHOWSQL_SQL_ID
feedback_content = "print in ascending order of total_points"
In dbms_cloud_ai.feedback, providing negative feedback for an existing SQL with just the feedback content would yield an LLM interaction and pass.
Please check whether this disparity in behaviour between Python and SQL is expected.
Sure, I will check and file a bug if needed.
Description
Feedback Select AI tests
Test only transaction